It really makes our job more challenging when otherwise reasonable medical journals publish pseudoscientific nonsense. The European Journal of Rheumatology is considered a respected peer-reviewed journal in the field. I have to wonder if the editors were completely asleep at the switch with this one, or if they may be harboring one or more true-believers (homeopathy remains popular in some corners of Europe).

I was recently asked for my opinion about this shockingly bad systematic review of homeopathic potions for rheumatological disease. These kinds of studies make the rounds on social media, are gullibly reported in the press, and simply confuse the public about what counts as good science. If there is a silver lining, it is that this makes a good teaching moment, a chance to flex our SBM skills and dissect the many problems with this study.

I suspect English may not be the primary language for the authors, but how did the editors let phrases like this get by: "Homeopathy has mainly been used to treat several diseases. On the other hand, it has been used in a few rheumatic disorders." This is not the main problem with the study – I point it out only as a marker for what I consider to be sloppy, even negligent, editing. The abstract continues:

“PubMed and Embase databases were examined for literature on homeopathy and RDs between 1966 and April 2023. There are 15 articles found with 811 patients. The diseases treated were osteoarthritis (n=3), followed by rheumatoid arthritis (n=3), ankylosing spondylitis (n=1), hyperuricemia (n=1), and tendinopathy (n=1)…Most studies (9/15) demonstrated improvements after homeopathy. Side effects were not seen or minimal and were comparable to placebo groups. In conclusion, this review shows homeopathy is a promising and safe therapy for RD treatment. However, the data needs to be reproduced in future more extensive studies, including other rheumatic conditions.”

We see the common caveat – more study is needed – which is often code for "the evidence is basically negative". But worse, the abstract generates more questions than it answers. Let's review some basic SBM principles for what makes a good systematic review. Was there any evidence of publication bias? Were any of these studies pre-registered? Was there any relationship between the rigor of the study and the outcome? Was there any heterogeneity in the study results? Don't expect any of these questions to be answered.

Further, the number of studies is ridiculously low, especially for a review covering over 50 years. They found 15 articles, covering five different diseases, with each disease having 1-3 studies. This is extremely thin. Plus, it is absurd to conflate these studies – they should have analyzed each disease separately. Worse still, each study used a different assortment of homeopathic treatments, but the review treats them all as one thing, homeopathy. What does that mean? Does this mean all homeopathic treatments are equivalent? It doesn't matter which specific treatment you choose? Saying that "homeopathy works" is like saying "medication works". Which medication, at what dose, for which diseases?

But of course my primary question after reading the abstract and introduction was – what was the quality of these studies, and were the outcomes really positive? Well, let’s take a look. Of the 15 studies, 10 indicate that they were double-blind controlled trials.

1 – Andrade et al 1991 – the review claims this study showed improvement with homeopathy over placebo (without commenting on statistical significance), but the study itself reports: "There was no statistically significant difference between groups."

2 – Gibson et al 1980 – the review claims a positive result, but this is a small study, 20 patients in each group, and it contains this curious note: "AM made two errors in assessing his patients on active homoeopathy and two in assessing his placebo patients, while RG made one error in assessing his patients on active therapy and four in assessing his placebo patients. The patients on active therapy who were assessed wrongly by both physicians had not improved and had therefore been assessed as being on placebo." So both evaluators wrongly assumed that placebo patients with improvement were on active treatment and that non-responders on active treatment were on placebo – all errors in the direction of making the treatment seem effective.

3 – Koley et al 2015 – negative study.

4 – Haselen et al 2000 – no difference between homeopathic gel and analgesic gel, but no placebo arm.

5 – Shipley et al 1980 – negative study.

6 – Bell et al 2004 – statistically significant results, but a high drop-out rate – 9 out of 63 – which calls the results into question.

7 – Shirmer et al 2000 – negative study.

8 – Janczewska et al 2023 – the review reports this as positive, but the three treatment groups (EM and LED therapy alone, homeopathic gel alone, and both combined) showed no significant difference.

9 – Fisher et al 1989 – reported as positive, but a very small study (30 patients total), and not all outcomes were significant.

10 – Relton et al 2008 – reported as positive, but it also had a high dropout rate: "Drop out rate in the usual care group was higher than the homeopath care group (8/24 vs 3/23)."

So of the 10 studies that were blinded and controlled, six were straight-up negative. The four with positive results were all very small studies (essentially pilot studies) with some highly problematic features. The remaining five non-blinded studies, with their subjective outcomes, are essentially worthless for determining efficacy. Overall this is the pattern we expect to see for a treatment that does not work – small preliminary studies showing mixed results, with the better studies mostly negative.

The review authors are simply wrong in saying that "most" of the studies were positive, and they failed to consider the relationship between the quality of the studies and their outcomes. The authors believe these results are encouraging, but they are the opposite.

It is telling how few studies have been done over five decades. With legitimate medical interventions, we generally see a building of studies in the literature – increasing quality and rigor, larger studies, more definitive designs, and replications – until we build an evidence base that allows us to reject the null hypothesis and conclude a treatment is efficacious, or fail to do so and abandon the treatment as not effective. But with homeopathy, we never see this.

We never see any specific treatment for any specific condition where multiple high-quality studies replicate a clinically significant and statistically significant effect. There isn't a single indication for which any homeopathic product or treatment has been shown to be effective. Instead what we get is what we see in this review – a scattershot of poorly designed small studies with unreliable results. The studies of any quality tend to be negative. There is no replicable effect here.

What we also see, and should not, is gullible authors publishing terrible reviews that ignore standard practice and misrepresent the evidence base – and editors of an otherwise respected peer-reviewed journal publishing such work.

Author

Founder and currently Executive Editor of Science-Based Medicine Steven Novella, MD is an academic clinical neurologist at the Yale University School of Medicine. He is also the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of the NeuroLogicaBlog, a daily blog that covers news and issues in neuroscience, but also general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella also has produced two courses with The Great Courses, and published a book on critical thinking - also called The Skeptics Guide to the Universe.
